Real-world optimization problems may have different underlying structures. In black-box optimization, the dependencies among decision variables remain unknown. However, some techniques can discover such interactions accurately. In Large Scale Global Optimization (LSGO), problems are high-dimensional, and it has been shown that decomposing an LSGO problem into subproblems and optimizing them separately is effective. The effectiveness of such an approach may depend heavily on the accuracy of the problem decomposition. Many state-of-the-art decomposition strategies derive from Differential Grouping (DG). However, if a given problem consists of non-additively separable subproblems, their ability to detect only the true interactions may decrease significantly. Therefore, we propose Incremental Recursive Ranking Grouping (IRRG), which does not suffer from this flaw. IRRG consumes more fitness function evaluations than recent DG-based propositions such as Recursive DG 3 (RDG3). Nevertheless, for problems with additively separable subproblems that suit RDG3, the effectiveness of the considered Cooperative Co-evolution frameworks is similar after embedding either IRRG or RDG3, whereas after replacing additive separability with non-additive separability, embedding IRRG leads to results of significantly higher quality.
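The abstract does not spell out the interaction test itself; for context, a minimal sketch of the classic pairwise Differential Grouping check (the family of methods IRRG improves on) might look as follows. The objective `f`, the perturbation size `delta`, and the threshold `eps` are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def interact(f, x, i, j, delta=1.0, eps=1e-6):
    """DG-style pairwise check: for an additively separable pair (i, j),
    the fitness change from perturbing x_i is the same whether or not
    x_j has been shifted; a differing change signals an interaction."""
    x1 = x.copy(); x1[i] += delta      # perturb x_i at the base point
    d1 = f(x1) - f(x)                  # effect of x_i alone

    x2 = x.copy(); x2[j] += delta      # shift x_j first
    x3 = x2.copy(); x3[i] += delta     # then perturb x_i again
    d2 = f(x3) - f(x2)                 # effect of x_i after shifting x_j

    return abs(d1 - d2) > eps          # deltas differ -> variables interact

# Toy usage: x0^2 + x1*x2 has one interacting pair (1, 2).
f = lambda x: x[0] ** 2 + x[1] * x[2]
print(interact(f, np.zeros(3), 1, 2))  # True  (non-separable pair)
print(interact(f, np.zeros(3), 0, 1))  # False (separable pair)
```

As the abstract notes, checks of this additive form can misclassify interactions when subproblems are only non-additively separable, which is the gap IRRG targets.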
In fighting games, individual players of the same skill level often exhibit distinct strategies from one another through their gameplay. Despite this, the majority of AI agents for fighting games have only a single strategy for each "level" of difficulty. To make AI opponents more human-like, we'd ideally like to see multiple different strategies at each level of difficulty, a concept we refer to as "multidimensional" difficulty. In this paper, we introduce a diversity-based deep reinforcement learning approach for generating a set of agents of similar difficulty that utilize diverse strategies. We find this approach outperforms a baseline trained with specialized, human-authored reward functions in both diversity and performance.
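The abstract does not specify the training objective; one common way to realize a diversity-based deep RL approach is to shape each agent's reward with a divergence bonus measured against the other agents' policies at the same state. The KL-based bonus and the `weight` parameter below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def diversity_bonus(probs_self, probs_others, weight=0.1):
    """Hypothetical diversity term: mean KL divergence from this agent's
    action distribution to each other agent's distribution at the same
    state. A more distinct strategy earns a larger bonus."""
    def kl(p, q):
        p = np.clip(p, 1e-8, 1.0)
        q = np.clip(q, 1e-8, 1.0)
        return float(np.sum(p * np.log(p / q)))

    return weight * np.mean([kl(probs_self, q) for q in probs_others])

# Shaped reward for agent k (pi_k gives action probabilities at state s):
#   r_shaped = r_env + diversity_bonus(pi_k(s), [pi_j(s) for j != k])
probs_a = np.array([0.7, 0.2, 0.1])   # agent k favors attacking
others = [np.array([0.1, 0.6, 0.3])]  # another agent favors blocking
print(diversity_bonus(probs_a, others))
```

Under a scheme like this, agents converge to similar returns (the environment reward dominates) while the bonus pushes their action distributions apart, matching the goal of similar difficulty with diverse strategies.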